Explainable artificial intelligence is proposed to provide explanations for the reasoning performed by an Artificial Intelligence. There is no consensus on how to evaluate the quality of these explanations, since even the definition of explanation itself is not clear in the literature. In particular, for the widely known Local Linear Explanations, there are qualitative proposals for the evaluation of explanations, although they suffer from theoretical inconsistencies. The case of images is even more problematic: a visual explanation may appear to explain a decision when all it really does is detect edges. There is a large number of metrics in the literature specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics capable of measuring the desirable aspects of explanations in a robust and correct way. In this paper, we propose a procedure called REVEL to evaluate different aspects of the quality of explanations with a theoretically coherent development. This procedure advances the state of the art in several ways: it standardizes the concept of explanation and develops a series of metrics that not only allow comparisons between explanations but also provide absolute information about the explanation itself. The experiments have been carried out on four image datasets as benchmarks, where we show REVEL's descriptive and analytical power.
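As an illustration of the kind of quantity such metrics formalize, a minimal local-fidelity check for a local linear explanation might look as follows. This is a sketch for illustration only: the function names, the Gaussian perturbation scheme, and the error measure are assumptions, not REVEL's actual definitions.

```python
import numpy as np

def local_fidelity(black_box, explanation_weights, bias, x,
                   n_samples=500, sigma=0.1, rng=None):
    """Measure how well a local linear explanation (weights, bias)
    mimics the black box in a Gaussian neighborhood around x.
    Returns the mean absolute error between the two models."""
    rng = np.random.default_rng(rng)
    # Probe the local neighborhood of the explained instance.
    neighbors = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    surrogate_pred = neighbors @ explanation_weights + bias
    model_pred = black_box(neighbors)  # expected shape: (n_samples,)
    return np.mean(np.abs(surrogate_pred - model_pred))
```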
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computed tomography (CT) images. However, some small airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make the computational modules prone to discontinuous and false-negative predictions. Attention mechanisms have shown the capability to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by the fuzzy attention layer, should be an upgraded solution. This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from existing attention mechanisms, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features in different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency of the proposed method has been proved by testing on open datasets, including the EXACT'09 and LIDC datasets, as well as our in-house COVID-19 and fibrotic lung disease datasets.
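To make the core idea concrete, here is a toy sketch of attention weights derived from a learnable Gaussian membership function over a 3-D feature map. The layer shape, channel-wise parameterization, and gating scheme are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GaussianFuzzyAttention(nn.Module):
    """Toy channel-wise fuzzy attention: each channel has a learnable
    Gaussian membership function; the membership degree of each voxel's
    activation is used as an attention weight."""
    def __init__(self, channels):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))
        self.log_sigma = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, x):          # x: (B, C, D, H, W) feature map
        sigma = self.log_sigma.exp()
        membership = torch.exp(-0.5 * ((x - self.mu) / sigma) ** 2)
        return x * membership      # gate features by membership degree

feat = torch.randn(2, 8, 4, 16, 16)
out = GaussianFuzzyAttention(8)(feat)
```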
The combination of convolutional and recurrent neural networks is a promising framework that allows the extraction of high-quality spatio-temporal features together with their temporal dependencies, which is key in time series prediction problems such as forecasting, classification, or anomaly detection. In this paper, the TSFEDL library is introduced. It compiles 20 state-of-the-art methods for time series feature extraction and prediction using convolutional and recurrent deep neural networks, for use in multiple data mining tasks. The library is built upon a set of TensorFlow+Keras and PyTorch modules under the AGPLv3 license. The performance validation of the architectures included in this proposal confirms the usefulness of this Python package.
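The general CNN+RNN pattern that such architectures share can be sketched as below. This is a generic illustration of the pattern, not TSFEDL's actual API; the class name, layer sizes, and task head are assumptions.

```python
import torch
import torch.nn as nn

class ConvRecurrentNet(nn.Module):
    """Generic CNN+LSTM time-series model: 1-D convolutions extract
    local patterns, an LSTM models their temporal dependencies."""
    def __init__(self, in_channels, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.conv(x)           # (batch, 32, time/2)
        seq = feats.transpose(1, 2)    # (batch, time/2, 32)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])        # logits from last hidden state

logits = ConvRecurrentNet(in_channels=3)(torch.randn(8, 3, 128))
```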
We propose FiberNet, a method to estimate \emph{in-vivo} the cardiac fiber architecture of the human atria from multiple catheter recordings of the electrical activation. Cardiac fibers play a central role in the electrical function of the heart, yet they are difficult to determine in-vivo and are hence rarely truly patient-specific in existing cardiac models. FiberNet learns the fiber arrangement by solving an inverse problem with physics-informed neural networks. The inverse problem amounts to identifying the conduction velocity tensor of a cardiac propagation model from a set of sparse activation maps. The use of multiple maps enables the simultaneous identification of all components of the conduction velocity tensor, including the local fiber angle. We test extensively on synthetic 2-D and 3-D examples, diffusion tensor fibers, and a patient-specific case. We show that a few activation maps are sufficient to accurately capture the fibers, even in the presence of noise. With fewer maps, the role of regularization becomes prominent. Moreover, we show that the fitted model can robustly reproduce unseen activation maps. We envision that FiberNet will help the creation of patient-specific models for personalized medicine. The full code is available at http://github.com/fsahli/fibernet.
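A generic sketch of this kind of physics-informed inverse formulation is shown below, assuming an anisotropic eikonal propagation model; the network interfaces, loss weighting, and the exact residual form are assumptions, not FiberNet's implementation (which is at the URL above).

```python
import torch

def eikonal_residual(T_net, D_net, x):
    """Residual of the anisotropic eikonal equation
    grad(T)^T D grad(T) = 1, where T_net predicts the activation time
    and D_net the conduction velocity tensor (N, d, d) at points x."""
    x = x.requires_grad_(True)
    T = T_net(x)                               # (N, 1)
    grad_T, = torch.autograd.grad(T.sum(), x, create_graph=True)
    D = D_net(x)                               # (N, d, d), assumed SPD
    quad = torch.einsum('ni,nij,nj->n', grad_T, D, grad_T)
    return (quad - 1.0) ** 2

def pinn_loss(T_net, D_net, x_data, t_data, x_colloc):
    # Data misfit on sparse activation maps + physics residual
    # at collocation points.
    data_fit = ((T_net(x_data).squeeze(-1) - t_data) ** 2).mean()
    physics = eikonal_residual(T_net, D_net, x_colloc).mean()
    return data_fit + physics
```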
The data available in machine learning applications is becoming increasingly complex, due to higher dimensionality and difficult classes. There exists a wide variety of measures of the complexity of labeled data, based on class overlap, separability or boundary shapes, as well as group morphology. Many techniques can transform the data in order to find better features, but few focus specifically on reducing data complexity. Most data transformation methods mainly treat the dimensionality aspect, leaving aside the information available in class labels, which can be useful when classes are complex in some way. This paper proposes an autoencoder-based approach to complexity reduction, using class labels to inform the loss function about the adequacy of the generated variables. This leads to three different new feature learners: Scorer, Skaler and Slicer. They are based on Fisher's discriminant ratio, the Kullback-Leibler divergence and least-squares support vector machines, respectively. They can be applied as a preprocessing stage for binary classification problems. A thorough experimentation across 27 datasets and a range of complexity and classification metrics shows that class-informed autoencoders outperform 4 other popular unsupervised feature extraction techniques, especially when the final goal is to use the data for a classification task.
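As a rough sketch of the class-informed idea, one can add a Fisher-ratio-style penalty on the latent code to a standard reconstruction loss, as below. The penalty form, network sizes, and weighting are assumptions for illustration, not the paper's exact Scorer formulation.

```python
import torch
import torch.nn as nn

def fisher_ratio_penalty(z, y):
    """Encourage class separation in the latent code z (binary labels y):
    minimize within-class scatter relative to between-class distance,
    which is equivalent to maximizing Fisher's discriminant ratio."""
    z0, z1 = z[y == 0], z[y == 1]
    between = ((z0.mean(0) - z1.mean(0)) ** 2).sum()
    within = z0.var(0).sum() + z1.var(0).sum()
    return within / (between + 1e-8)

encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 2))
decoder = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 20))
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))
z = encoder(x)
# Reconstruction quality + class-informed adequacy of the new features.
loss = nn.functional.mse_loss(decoder(z), x) + 0.1 * fisher_ratio_penalty(z, y)
loss.backward()
```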
Deep learning outperforms other machine learning algorithms in a variety of tasks and, as a result, is widely used. However, like other machine learning algorithms, deep learning and convolutional neural networks (CNNs) perform worse when the dataset presents label noise. Therefore, it is important to develop algorithms that help train deep networks and improve their generalization to noise-free test sets. In this paper, we propose a robust training strategy against label noise, called RAFNI, that can be used with any CNN. The algorithm filters and relabels training instances based on the predictions of the backbone neural network and their probabilities during training. That way, the algorithm improves the generalization ability of the CNN on its own. RAFNI consists of three mechanisms: two mechanisms that filter instances and one mechanism that relabels instances. In addition, it does not assume that the noise rate is known, nor does it need to be estimated. We evaluated our algorithm using different datasets of several sizes and characteristics. We also compared it with state-of-the-art models using the CIFAR10 and CIFAR100 benchmarks under different types and rates of label noise, and found that RAFNI achieves better results in most cases.
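A toy sketch of the filter-and-relabel idea, operating on per-instance prediction probabilities, is shown below. The thresholds and the single filtering rule are assumptions for illustration; RAFNI's actual mechanisms (two filters plus one relabeler, with their own criteria) are defined in the paper.

```python
import numpy as np

def filter_and_relabel(probs, labels, relabel_thr=0.95, filter_thr=0.01):
    """Toy per-epoch cleaning step: relabel an instance when the network
    is very confident in a different class, and drop it when the
    probability of its current label is very low.

    probs: (n, k) softmax outputs; labels: (n,) current labels."""
    preds = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    labels = labels.copy()

    # Relabel: confident disagreement with the current label.
    relabel = (preds != labels) & (confidence >= relabel_thr)
    labels[relabel] = preds[relabel]

    # Filter: the current label has become implausible.
    keep = probs[np.arange(len(labels)), labels] >= filter_thr
    return labels[keep], keep
```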
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
Numerous works use word embedding-based metrics to quantify societal biases and stereotypes in texts. Recent studies have found that word embeddings can capture semantic similarity but may be affected by word frequency. In this work we study the effect of frequency when measuring female vs. male gender bias with word embedding-based bias quantification methods. We find that Skip-gram with negative sampling and GloVe tend to detect male bias in high frequency words, while GloVe tends to return female bias in low frequency words. We show these behaviors still exist when words are randomly shuffled. This proves that the frequency-based effect observed in unshuffled corpora stems from properties of the metric rather than from word associations. The effect is spurious and problematic since bias metrics should depend exclusively on word co-occurrences and not individual word frequencies. Finally, we compare these results with the ones obtained with an alternative metric based on Pointwise Mutual Information. We find that this metric does not show a clear dependence on frequency, even though it is slightly skewed towards male bias across all frequencies.
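The PMI-based alternative has a simple closed form: since PMI(w, c) = log p(w|c) - log p(w), the target word's own frequency cancels when two contexts are compared. A minimal sketch under assumed co-occurrence counts (the context words, smoothing, and data structures are illustrative, not the paper's setup):

```python
import math
from collections import Counter

def pmi_bias(cooc, totals, word, female=("she",), male=("he",)):
    """Toy PMI-based gender bias score:
    bias(w) = PMI(w, female ctx) - PMI(w, male ctx)
            = log p(w | female ctx) - log p(w | male ctx),
    since the marginal p(w) cancels in the difference."""
    def log_cond(w, ctx):
        joint = sum(cooc[c][w] for c in ctx) + 1        # add-1 smoothing
        marginal = sum(totals[c] for c in ctx) + 1
        return math.log(joint / marginal)
    return log_cond(word, female) - log_cond(word, male)

cooc = {"she": Counter({"nurse": 30, "doctor": 10}),
        "he":  Counter({"nurse": 5,  "doctor": 40})}
totals = {w: sum(c.values()) for w, c in cooc.items()}
print(pmi_bias(cooc, totals, "nurse"))   # > 0: skewed to female contexts
```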
Reinforcement learning is a machine learning approach based on behavioral psychology. It is focused on learning agents that can acquire knowledge and learn to carry out new tasks by interacting with the environment. However, a problem arises when reinforcement learning is used in critical contexts where the users of the system need more information about, and more confidence in, the actions executed by an agent. In this regard, explainable reinforcement learning seeks to provide a learning agent with methods to explain its behavior in such a way that users with no experience in machine learning can understand it. One of these is the memory-based explainable reinforcement learning method, which uses an episodic memory to compute probabilities of success for each state-action pair. In this work, we propose to use the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks that need to be addressed first in order to solve a more complex task. The end goal is to verify whether it is possible to give the agent the ability to explain its actions in the global task as well as in the sub-tasks. The results obtained show that it is possible to use the memory-based method in hierarchical environments with high-level tasks and to compute the probabilities of success that serve as a basis for explaining the agent's behavior.
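The episodic-memory bookkeeping behind such success probabilities can be sketched as follows; the class design and the binary success signal are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

class EpisodicSuccessMemory:
    """Toy memory-based explanation store: track, for each state-action
    pair, how often episodes passing through it ended in success, and
    expose the empirical probability as an explanation."""
    def __init__(self):
        self.visits = defaultdict(int)
        self.successes = defaultdict(int)

    def record_episode(self, transitions, succeeded):
        for state, action in transitions:
            self.visits[(state, action)] += 1
            if succeeded:
                self.successes[(state, action)] += 1

    def p_success(self, state, action):
        n = self.visits[(state, action)]
        return self.successes[(state, action)] / n if n else 0.0

mem = EpisodicSuccessMemory()
mem.record_episode([("s0", "right"), ("s1", "pick")], succeeded=True)
mem.record_episode([("s0", "right"), ("s1", "drop")], succeeded=False)
print(mem.p_success("s0", "right"))   # 0.5
```

In a hierarchical setting, one such memory per sub-task plus one for the global task would yield the per-level probabilities described above.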
This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval 2022. This year we have reorganised and simplified the task in order to enable a greater depth of inquiry. Similar to last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid 2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and to prioritise short-term memorability prediction by elevating the Memento10k dataset to primary status. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.